IBM researchers design a fast, power-efficient chip for AI training


Thanks to powerful graphics chips and advances in distributed computing, optimizing the algorithms at the core of artificial intelligence is easier than ever. But the process is not particularly efficient on current-day hardware: even powerful GPUs can take days or weeks to train a neural network. That prompted researchers at IBM to develop a new chip tailor-made for AI training. In a paper published in the journal Nature, titled "Equivalent-accuracy accelerated neural-network training using analog memory," they describe a system of transistors and capacitors that can train neural networks quickly, precisely, and with high energy efficiency.

Neural networks consist of interconnected units called neurons or nodes (a collection of nodes is called a layer), which receive numerical inputs. In a basic network, each neuron multiplies its inputs by values called weights, sums the results, and passes that sum to an activation function, which defines the output of the node.
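To make that concrete, here is a minimal Python sketch of the basic neuron described above. The specific inputs, weights, and the choice of ReLU as the activation function are illustrative assumptions, not details from the IBM paper.

```python
def relu(x: float) -> float:
    """A common activation function: returns x if positive, else 0."""
    return max(0.0, x)

def neuron_output(inputs: list[float], weights: list[float]) -> float:
    """Multiply each input by its weight, sum the results,
    and pass the sum through the activation function."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return relu(weighted_sum)

# Example: a neuron with three inputs.
# Weighted sum: 0.5*0.8 - 1.0*0.2 + 2.0*0.1 = 0.4, so ReLU outputs ~0.4.
print(neuron_output([0.5, -1.0, 2.0], [0.8, 0.2, 0.1]))
```

Training a network means repeatedly adjusting those weights, which is the step IBM's analog-memory approach aims to accelerate.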